Creators/Authors contains: "Sherman, Michelle"

  1. Modern advances in unmanned aerial vehicle (UAV) technology have widened the scope of commercial and military applications. However, the increased dependency on wireless communications exposes UAVs to potential attacks and introduces new threats, especially from UAVs designed with the malicious intent of targeting vital infrastructure. Significant efforts have been made by researchers and United States (U.S.) Department of Defense (DoD) agencies to develop countermeasures for the detection, interception, or destruction of malicious UAVs. One promising countermeasure is the use of a counter-UAV (CUAV) swarm to detect, track, and neutralize a malicious UAV. This paper surveys state-of-the-art swarm intelligence algorithms for achieving cooperative capture of a mobile target UAV. The major design and implementation challenges for swarm control, algorithm architecture, and safety protocols are considered. A prime challenge for UAV swarms is a robust communication infrastructure that enables accurate data transfer between UAVs for efficient path planning. A multi-agent deep reinforcement learning approach is applied to train a group of CUAVs to intercept a faster malicious UAV while avoiding collisions with other CUAVs and non-cooperating obstacles (i.e., other aerial objects maneuvering in the area). The impact of the latency incurred by UAV-to-UAV communications is showcased and discussed with preliminary numerical results.
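     A minimal illustrative sketch (not from the paper; the constants, state layout, and reward shaping are assumptions) of the kind of per-agent reward a multi-agent DRL trainer could use for this pursuit task: distance shaping toward the target, a cooperative capture bonus, and penalties for collisions with teammates or non-cooperating obstacles.

     ```python
     import numpy as np

     # Assumed constants, for illustration only
     CAPTURE_RADIUS = 1.0                    # target "captured" within this range
     COLLISION_RADIUS = 0.5                  # minimum safe separation
     W_DIST, W_CAPTURE, W_COLLIDE = 0.1, 10.0, 5.0

     def cuav_reward(pos_i, teammates, obstacles, target):
         """Reward for one CUAV, given 3D positions of its teammates,
         nearby obstacles, and the malicious target (numpy arrays)."""
         d_target = np.linalg.norm(target - pos_i)
         reward = -W_DIST * d_target                  # shaping: close the gap
         if d_target < CAPTURE_RADIUS:
             reward += W_CAPTURE                      # cooperative capture bonus
         for other in list(teammates) + list(obstacles):
             if np.linalg.norm(other - pos_i) < COLLISION_RADIUS:
                 reward -= W_COLLIDE                  # collision penalty
         return reward
     ```

     The UAV-to-UAV communication latency studied in the paper could be emulated in such a setup by feeding each agent stale teammate positions drawn from a delay buffer rather than the current state.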
  2. In urban environments, tall buildings or structures can limit the direct channel link between a base station (BS) and an Internet-of-Things device (IoTD) for wireless communication. Unmanned aerial vehicles (UAVs) with a mounted reconfigurable intelligent surface (RIS), denoted UAV-RIS, have been introduced in recent works to enhance system throughput capacity by acting as relay nodes between the BS and the IoTDs in wireless access networks. Uncoordinated UAVs or RIS phase-shift elements make unnecessary adjustments that can significantly impair signal transmission to IoTDs in the area. The concept of age of information (AoI) has been proposed in wireless network research to characterize the freshness of received update messages. To minimize the average sum of AoI (ASoA) in the network, two model-free deep reinforcement learning (DRL) approaches, off-policy Deep Q-Network (DQN) and on-policy Proximal Policy Optimization (PPO), are developed to solve the problem by jointly optimizing the RIS phase shift, the location of the UAV-RIS, and the IoTD transmission scheduling for large-scale IoT wireless networks. Analysis of loss functions and extensive simulations are performed to compare the stability and convergence performance of the two algorithms. The results reveal the superiority of the on-policy approach, PPO, over the off-policy approach, DQN, in terms of stability, convergence speed, and performance under diverse environment settings.
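     A minimal illustrative sketch (an assumption, not the paper's code) of the per-slot AoI bookkeeping behind the ASoA objective: every device's age grows by one each slot, and a successfully delivered update resets that device's age, so the DRL agent is rewarded for keeping the episode-averaged sum small.

     ```python
     import numpy as np

     def step_aoi(aoi, scheduled, delivered):
         """Advance the age of information (AoI) by one time slot.

         aoi:       current age per IoTD (numpy array)
         scheduled: boolean mask of IoTDs scheduled to transmit this slot
         delivered: boolean mask of transmissions decoded at the BS, which
                    in this setting depends on the UAV-RIS placement and
                    phase shifts
         """
         aoi = aoi + 1                   # every device ages by one slot
         aoi[scheduled & delivered] = 1  # a fresh update resets the age
         return aoi

     def asoa(aoi_history):
         """Average sum of AoI over an episode: the minimization target."""
         return np.mean([a.sum() for a in aoi_history])
     ```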
  3. Deploying unmanned aerial vehicle (UAV) mounted base stations with a renewable-energy charging infrastructure at a temporary event (e.g., sporadic hotspots for a light reconnaissance mission, or disaster-struck areas where the regular power grid is unavailable) provides a responsive and cost-effective solution for cellular networks. Nevertheless, the energy constraints of renewable sources (e.g., solar panels) impose new challenges on recharging coordination. The amount of energy available at a charging station (CS) at any given time varies with the time of day, the location, sunlight availability, and the size and quality factor of the solar panels used. Uncoordinated UAVs make redundant recharging attempts, resulting in severe quality of service (QoS) degradation. The system's stability and lifetime depend on the coordination between the UAVs and the available CSs. In this paper, we develop a time-step-based reinforcement learning algorithm for UAV recharging scheduling and coordination using a Q-learning approach. The agent acts as a central controller of the UAVs in the system and uses ϵ-greedy action selection. The goal of the algorithm is to maximize the average achieved throughput, reduce the number of recharging occurrences, and increase the lifespan of the network. Extensive simulations based on experimentally validated UAV and charging energy models reveal that our approach exceeds the benchmark strategies by 381% in system duration, reduces the number of recharging occurrences by 47%, and achieves 66% of the average throughput of a power-grid-based infrastructure with no energy limitations on the CSs.
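     A minimal illustrative sketch (hyperparameters and the state/action encodings are placeholders, not the paper's) of tabular Q-learning with ϵ-greedy action selection for a central controller that decides, at each time step, which recharging action to take.

     ```python
     import numpy as np

     ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed hyperparameters

     class RechargeScheduler:
         """Central controller choosing a recharge action per time step."""

         def __init__(self, n_states, n_actions, seed=None):
             self.q = np.zeros((n_states, n_actions))
             self.rng = np.random.default_rng(seed)

         def act(self, s):
             # ϵ-greedy: explore with probability EPSILON, else exploit
             if self.rng.random() < EPSILON:
                 return int(self.rng.integers(self.q.shape[1]))
             return int(np.argmax(self.q[s]))

         def update(self, s, a, r, s_next):
             # one-step Q-learning temporal-difference update
             td_target = r + GAMMA * np.max(self.q[s_next])
             self.q[s, a] += ALPHA * (td_target - self.q[s, a])
     ```

     In line with the stated objectives, the reward in such a setup would combine achieved throughput with penalties for redundant recharging attempts.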